Explore First, Exploit Next: The True Shape of Regret in Bandit Problems
Authors
Abstract
We revisit lower bounds on the regret in the case of multi-armed bandit problems. We obtain non-asymptotic bounds and provide straightforward proofs based only on well-known properties of Kullback-Leibler divergences. These bounds show that in an initial phase the regret grows almost linearly, and that the well-known logarithmic growth of the regret only holds in a final phase. The proof techniques go to the essence of the arguments and are free of unnecessary complications.
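The two-phase shape of the regret described above can be seen empirically. Below is a minimal simulation sketch, not taken from the paper: it runs the standard UCB1 strategy on a two-armed Bernoulli bandit and prints the cumulative pseudo-regret at several horizons. The bandit instance, the choice of UCB1, and all parameters are illustrative assumptions; early on the regret grows almost linearly (the arms cannot yet be distinguished), while for large horizons it grows roughly logarithmically.

```python
# Illustrative sketch (assumptions: UCB1 on a two-armed Bernoulli bandit);
# not the construction used in the paper's lower-bound proofs.

import math
import random


def run_ucb1(means, horizon, seed=0):
    """Run UCB1 on a Bernoulli bandit and return the cumulative pseudo-regret per round."""
    rng = random.Random(seed)
    k = len(means)
    best = max(means)
    counts = [0] * k      # number of pulls of each arm
    sums = [0.0] * k      # sum of observed rewards of each arm
    regret = 0.0
    regrets = []
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1   # pull each arm once to initialise the indices
        else:
            arm = max(
                range(k),
                key=lambda a: sums[a] / counts[a]
                + math.sqrt(2.0 * math.log(t) / counts[a]),
            )
        reward = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
        regret += best - means[arm]   # pseudo-regret: gap of the pulled arm
        regrets.append(regret)
    return regrets


if __name__ == "__main__":
    means = [0.5, 0.45]     # two close arms -> long initial exploration phase
    regrets = run_ucb1(means, horizon=20000)
    for t in [100, 1000, 5000, 10000, 20000]:
        # Early: regret grows almost linearly (about gap/2 per round, since both
        # arms are pulled roughly equally often). Late: growth is roughly log(t).
        print(f"t = {t:6d}   regret = {regrets[t - 1]:7.2f}   regret/t = {regrets[t - 1] / t:.4f}")
```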
Similar resources
Lipschitz Bandits: Regret Lower Bound and Optimal Algorithms
We consider stochastic multi-armed bandit problems where the expected reward is a Lipschitz function of the arm, and where the set of arms is either discrete or continuous. For discrete Lipschitz bandits, we derive asymptotic problem-specific lower bounds for the regret satisfied by any algorithm, and propose OSLB and CKL-UCB, two algorithms that efficiently exploit the Lipschitz structure of t...
Stochastic and Adversarial Combinatorial Bandits
This paper investigates stochastic and adversarial combinatorial multi-armed bandit problems. In the stochastic setting, we first derive problem-specific regret lower bounds, and analyze how these bounds scale with the dimension of the decision space. We then propose COMBUCB, algorithms that efficiently exploit the combinatorial structure of the problem, and derive finite-time upper bounds on thei...
Contextual Multi-armed Bandits under Feature Uncertainty
We study contextual multi-armed bandit problems under linear realizability on rewards and uncertainty (or noise) on features. For the case of identical noise on features across actions, we propose an algorithm, coined NLinRel, having an O(T^{7/8}(log(dT) + K√d)) regret bound for T rounds, K actions, and d-dimensional feature vectors. Next, for the case of non-identical noise, we observe that...
Stochastic Contextual Bandits with Known Reward Functions
Many sequential decision-making problems in communication networks such as power allocation in energy harvesting communications, mobile computational offloading, and dynamic channel selection can be modeled as contextual bandit problems which are natural extensions of the well-known multi-armed bandit problem. In these problems, each resource allocation or selection decision can make use of ava...
Efficient Contextual Bandits in Non-stationary Worlds
Most contextual bandit algorithms minimize regret relative to the best fixed policy, a questionable benchmark for the non-stationary environments ubiquitous in applications. In this work, we obtain efficient contextual bandit algorithms with strong guarantees for alternate notions of regret suited to these non-stationary environments. Two of our algorithms equip existing methods for i.i.d. problems with sophi...
Journal title: CoRR
Volume: abs/1602.07182
Pages: -
Publication date: 2016